Speed up flake8-async by telling libcst to skip deepcopy. #438
jakkdl merged 2 commits into python-trio:main from
Conversation
Good catch! This does seem to introduce a small footgun in development, but as far as I know none of the current visitors duplicates any nodes, so it should indeed be perfectly safe now, and I think the speedup is worth the minor future risk.
Hey @yilei, it looks like that was the first time we merged one of your PRs! Thanks so much! 🎉 🎂 If you want to keep contributing, we'd love to have you. So, I just sent you an invitation to join the python-trio organization on GitHub! If you accept, then here's what will happen:
If you want to read more, here's the relevant section in our contributing guide. Alternatively, you're free to decline or ignore the invitation. You'll still be able to contribute as much or as little as you like, and I won't hassle you about joining again. But if you ever change your mind, just let us know and we'll send another invitation. We'd love to have you, but more importantly we want you to do whatever's best for you. If you have any questions, well... I am just a humble Python script, so I probably can't help. But please do post a comment here, or in our chat, or on our forum, whatever's easiest, and someone will help you out!
Why?

The default deepcopy in libcst guards against the same CST node object appearing at two positions in the tree (metadata is keyed by node identity). In our use of libcst in flake8-async, the parser output and the result of a prior `.visit()` never share nodes, so the copy is wasted work. This stays safe as long as no visitor returns a cached CST node from multiple `leave_*` calls. See
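To see why identity-keyed metadata makes node sharing dangerous, here is a stdlib-only sketch (not libcst code; the `Node` class and `column_metadata` helper are made up for illustration) of what goes wrong when one node object occupies two positions in a tree:

```python
# Toy tree node, standing in for a CST node.
class Node:
    def __init__(self, children=()):
        self.children = list(children)

def column_metadata(root):
    """Record each node's index among its parent's children,
    keyed by node identity (as libcst keys metadata by identity)."""
    meta = {}
    def visit(node):
        for i, child in enumerate(node.children):
            meta[id(child)] = i
            visit(child)
    visit(root)
    return meta

leaf = Node()
# The same object appears as both child 0 and child 1: the second
# visit silently overwrites the metadata recorded for position 0.
root = Node([leaf, leaf])
meta = column_metadata(root)
assert meta[id(leaf)] == 1   # the entry for position 0 is lost
assert len(meta) == 1        # two positions, only one metadata entry
```

The defensive deepcopy avoids this by giving every position a distinct object; skipping it is only sound when, as here, no visitor ever reuses a node.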
pyref.dev/libcst.metadata.MetadataWrapper.__init__

Running the flake8-async tests on my machine now takes 29.81s instead of 38.95s with this change.